Research Computing and Bioinformatics Core Facility
Resources
Welcome to the University of Mississippi Medical Center's High-Performance Computing Cluster, the Cypress Computing Cluster (C3). This page is intended for University of Mississippi Medical Center employees who need access to computing resources for research, high-performance tasks, and data analysis.
Here you will find detailed information about C3's technical specifications and usage guidelines. Please review the HPCC overview below to see what the cluster offers and how it might support your work.
To request access to C3, please complete the registration form by clicking the button below. Your submission will be reviewed by C3's administration team, and you will be notified once your account has been approved and set up.
Registration Form
Cypress Cluster Overview:
- 15 Compute nodes
- 1232 Logical CPU cores (616 Physical CPU cores)
- 15 TB of memory (1 TB per compute node)
- 275 TB of NFS storage (quota: 1 TB per user*)
- Job scheduling via SLURM
C3 is a cluster of Cisco servers with fifteen compute nodes, providing a total of 616 physical cores and 1232 logical CPU cores. Three of the compute nodes are equipped with Intel Xeon Gold 6238R CPUs, each node providing 112 logical CPUs (28 cores per socket, 2 sockets per node, with Hyper-Threading). Ten of the compute nodes use Intel Xeon Gold 6544Y CPUs, each providing 64 logical CPUs (16 cores per socket, 2 sockets per node, with Hyper-Threading). The remaining two compute nodes are GPU nodes powered by Intel Xeon Platinum 8562Y+ CPUs, each providing 128 logical CPUs (32 cores per socket, 2 sockets per node, with Hyper-Threading) and equipped with one NVIDIA L40S GPU with 48 GB of high-bandwidth memory. C3 is also equipped with 15 TB of memory and 275 TB of NFS network storage.
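For reference, the cluster-wide core counts quoted above follow directly from the per-node figures; the short Python sketch below simply reproduces that arithmetic.

```python
# Reproduce the cluster-wide core totals from the per-node figures above.
nodes = {
    "Xeon Gold 6238R":      {"count": 3,  "cores_per_socket": 28, "sockets": 2},
    "Xeon Gold 6544Y":      {"count": 10, "cores_per_socket": 16, "sockets": 2},
    "Xeon Platinum 8562Y+": {"count": 2,  "cores_per_socket": 32, "sockets": 2},
}

physical = sum(n["count"] * n["cores_per_socket"] * n["sockets"] for n in nodes.values())
logical = physical * 2  # Hyper-Threading doubles the logical core count

print(physical, logical)  # 616 1232
```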
The HPC environment leverages the Simple Linux Utility for Resource Management (SLURM) software to efficiently manage parallel jobs, allocate resources across compute nodes, and maintain historical performance metrics. The SLURM manager monitors resource availability on each compute node and schedules jobs accordingly. Users can compose, edit, and submit jobs, as well as track and manage their submissions, through this system.
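As a minimal sketch of that workflow, the Python example below composes a simple SLURM batch script and submits it with sbatch. The job name, resource requests, and runtime are hypothetical placeholders; check with the C3 administrators for the partitions, limits, and defaults configured on the cluster.

```python
"""Minimal sketch: compose and submit a SLURM batch job from Python.
All job parameters here are illustrative, not C3-specific defaults."""
import subprocess

job_script = """#!/bin/bash
#SBATCH --job-name=example_job
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=02:00:00
#SBATCH --output=example_job_%j.out

echo "Running on $(hostname) with $SLURM_CPUS_PER_TASK CPUs"
"""

# Write the batch script to disk, then hand it to sbatch for scheduling.
with open("example_job.sh", "w") as f:
    f.write(job_script)

result = subprocess.run(["sbatch", "example_job.sh"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())  # e.g. "Submitted batch job 12345"
```

Once submitted, a job can be tracked with standard SLURM commands such as squeue (pending and running jobs) and sacct (accounting history).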
Software Available
The following software is currently available on the cluster:
annovar | conda | mirdeep2 | salmon | tophat |
bamtools | FastQC | mummer | samtools | TrimGalore |
bamUtil | fastq-tools | mview | seqkit | Trimmomatic |
bbmap | gatk | ncbi-blast | seqtk | trinity |
bcftools | go | ncbi-cxx-toolkit-public | shiny-server | ViennaRNA |
bcl2fastq | hisat2 | nextflow | Singularity | |
beagle-lib | hmmer | pangolin | slurm | |
beast2 | htslib | phylip | snpEff | |
bedtools | igv | phyml | SOAPdenovo-Trans | |
bedtools2 | impute | picard | spaceranger-2.0.0 | |
Bismark | jvarkit | primer3 | squid | |
bowtie | kaiju | prokka | sratoolkit | |
bowtie2 | kraken2 | randfold_src | star | |
bwa | libStatGen | rapidNJ | stringtie | |
cellranger-7.0.0 | meme | raxml | tiny | |
cellranger-arc-2.0.2 | minialign | RSEM | treetime | |
If you have any questions, please contact us at cypress@umc.edu.